
    Enhanced toluene removal using granular activated carbon and the yeast strain Candida tropicalis in bubble-column bioreactors

    The yeast strain Candida tropicalis was used for the biodegradation of gaseous toluene. Toluene was effectively treated by a liquid culture of C. tropicalis in a bubble-column bioreactor, and the toluene removal efficiency increased with decreasing gas flow rate. However, toluene mass transfer from the gas to the liquid phase was a major limitation for the uptake of toluene by C. tropicalis. The toluene removal efficiency was enhanced when granular activated carbon (GAC) was added as a fluidized material. The GAC fluidized bioreactor demonstrated toluene removal efficiencies ranging from 50 to 82% when the inlet toluene loading was varied between 13.1 and 26.9 g/m3/h. The yield value of C. tropicalis ranged from 0.11 to 0.21 g-biomass/g-toluene, which was substantially lower than the yield values reported for bacteria in the literature. The maximum elimination capacity determined in the GAC fluidized bioreactor was 172 g/m3/h at a toluene loading of 291 g/m3/h. Transient loading experiments revealed that approximately 50% of the toluene introduced was initially adsorbed onto the GAC during an increased loading period, and then slowly desorbed and became available to the yeast culture. Hence, the fluidized GAC helped improve the gas-to-liquid mass transfer of toluene, resulting in a high toluene removal capacity. Consequently, the GAC bubble-column bioreactor using a culture of C. tropicalis can be successfully applied for the removal of gaseous toluene.
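    The abstract quotes inlet loading, removal efficiency (RE), and elimination capacity (EC), which are the standard bioreactor performance metrics. A minimal sketch of how they relate, with illustrative numbers rather than the paper's measured data:

    ```python
    # Standard gas-phase bioreactor metrics; concentrations in g/m3,
    # flow in m3/h, reactor volume in m3.  All values below are
    # illustrative examples, not the paper's data.

    def inlet_loading(c_in, flow, volume):
        """Inlet toluene loading in g/m3/h."""
        return c_in * flow / volume

    def removal_efficiency(c_in, c_out):
        """Removal efficiency in percent."""
        return 100.0 * (c_in - c_out) / c_in

    def elimination_capacity(c_in, c_out, flow, volume):
        """Elimination capacity in g/m3/h; equals loading * RE / 100."""
        return (c_in - c_out) * flow / volume

    load = inlet_loading(2.0, 0.1, 0.01)            # 20.0 g/m3/h
    re = removal_efficiency(2.0, 0.5)               # 75.0 %
    ec = elimination_capacity(2.0, 0.5, 0.1, 0.01)  # 15.0 g/m3/h
    ```

    Note that EC is bounded above by the loading, so the reported maximum EC of 172 g/m3/h at a loading of 291 g/m3/h corresponds to an RE of roughly 59%.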

    Consistency test of general relativity from large scale structure of the Universe

    We construct a consistency test of General Relativity (GR) on cosmological scales. This test enables us to distinguish between the two alternatives to explain the late-time accelerated expansion of the universe, that is, dark energy models based on GR and modified gravity models without dark energy. We derive the consistency relation in GR which is written only in terms of observables - the Hubble parameter, the density perturbations, the peculiar velocities and the lensing potential. The breakdown of this consistency relation implies that the Newton constant which governs large-scale structure is different from that in the background cosmology, which is a typical feature in modified gravity models. We propose a method to perform this test by reconstructing the weak lensing spectrum from measured density perturbations and peculiar velocities. This reconstruction relies on Poisson's equation in GR to convert the density perturbations to the lensing potential. Hence any inconsistency between the reconstructed lensing spectrum and the measured lensing spectrum indicates the failure of GR on cosmological scales. The difficulties in performing this test using actual observations are discussed. Comment: 7 pages, 1 figure.
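    The Poisson-equation conversion the abstract relies on can be written out explicitly. The following uses one common sub-horizon, Fourier-space convention; the paper's own sign and gauge conventions may differ:

    ```latex
    % Sub-horizon Poisson equation in GR, linking the density contrast
    % \delta to the metric potential \Phi (one common sign convention):
    k^{2}\Phi = -4\pi G\, a^{2}\bar{\rho}\,\delta
    % With negligible anisotropic stress, \Psi = \Phi, so the lensing
    % potential \Phi + \Psi that weak lensing measures obeys
    k^{2}(\Phi + \Psi) = -8\pi G\, a^{2}\bar{\rho}\,\delta
    % while the continuity equation ties the peculiar-velocity
    % divergence \theta to the growth of \delta:
    \dot{\delta} + \theta = 0
    ```

    In modified gravity the first relation picks up an effective, scale-dependent Newton constant, which is why a mismatch between the reconstructed and measured lensing spectra signals a breakdown of GR.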

    Removal of mercury (II) from aqueous solution by using rice residues

    The sorption potential of rice residues for Hg(II) removal from aqueous solution was investigated. Rice husk (RH) and rice straw (RS) were selected and treated with sodium hydroxide (NaOH). The raw and modified adsorbents were characterized by scanning electron microscopy (SEM), Fourier-transform infrared spectroscopy (FTIR) and BET surface area measurements. The effects of pH, initial ion concentration, and agitation time on the removal process were studied in batch adsorption experiments. Two simple kinetic models, pseudo-first-order and pseudo-second-order, were tested to investigate the adsorption mechanisms. The kinetic data fit the pseudo-second-order model with correlation coefficients greater than 0.99 for all adsorbents. The equilibrium data fitted the Langmuir isotherm better than the Freundlich model. Alkali treatment produced a larger surface area, and RH-NaOH showed the highest adsorption capacity, followed by RS-Pure > RH-Pure > RS-NaOH. The maximum removal obtained by RH-NaOH and RS-Pure was 42 mg/l (80%) at pH 6.5 with 2 days of contact time (for a 50 mg/l initial concentration and 25 mg of adsorbent).

    Simultaneous Multiple Surface Segmentation Using Deep Learning

    The task of automatically segmenting 3-D surfaces representing boundaries of objects is important for quantitative analysis of volumetric images, and plays a vital role in biomedical image analysis. Recently, graph-based methods with a global optimization property have been developed and optimized for various medical imaging applications. Despite their widespread use, these methods require human experts to design transformations, image features, and surface smoothness priors, and to redesign them for each different tissue, organ or imaging modality. Here, we propose a deep-learning-based approach for segmentation of the surfaces in volumetric medical images, by learning the essential features and transformations from training data, without any human expert intervention. We employ a regional approach to learn the local surface profiles. The proposed approach was evaluated on simultaneous intraretinal layer segmentation of optical coherence tomography (OCT) images of normal retinas and retinas affected by age-related macular degeneration (AMD). The proposed approach was validated on 40 retina OCT volumes including 20 normal and 20 AMD subjects. The experiments showed statistically significant improvement in accuracy for our approach compared to state-of-the-art graph-based optimal surface segmentation with convex priors (G-OSC). A single convolutional neural network (CNN) was used to learn the surfaces for both normal and diseased images. The mean unsigned surface positioning error obtained by the G-OSC method, 2.31 voxels (95% CI 2.02-2.60 voxels), was improved to 1.27 voxels (95% CI 1.14-1.40 voxels) using our new approach. On average, our approach takes 94.34 s and requires 95.35 MB of memory, which is much faster than the 2837.46 s and 6.87 GB of memory required by the G-OSC method on the same computer system. Comment: 8 pages.
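    The evaluation metric quoted above, the mean unsigned surface positioning error, is the mean absolute voxel distance between the predicted and reference surface heights at each column position. A minimal sketch with made-up surfaces:

    ```python
    # Mean unsigned surface positioning error between two terrain-like
    # surfaces stored as 2-D arrays of voxel heights (one height per
    # A-scan column).  The arrays below are illustrative.
    import numpy as np

    def unsigned_positioning_error(pred, ref):
        """Mean absolute voxel distance between two equal-shape surfaces."""
        pred = np.asarray(pred, dtype=float)
        ref = np.asarray(ref, dtype=float)
        return float(np.mean(np.abs(pred - ref)))

    pred = [[10, 11, 12], [13, 14, 15]]
    ref  = [[10, 12, 11], [14, 14, 16]]
    err = unsigned_positioning_error(pred, ref)  # (0+1+1+1+0+1)/6 = 0.666...
    ```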

    Mobility Increases the Data Offloading Ratio in D2D Caching Networks

    Caching at mobile devices, accompanied by device-to-device (D2D) communications, is one promising technique to accommodate the exponentially increasing mobile data traffic. While most previous works ignored user mobility, there are some recent works taking it into account. However, the duration of user contact times has been ignored, making it difficult to explicitly characterize the effect of mobility. In this paper, we adopt the alternating renewal process to model the duration of both the contact and inter-contact times, and investigate how the caching performance is affected by mobility. The data offloading ratio, i.e., the proportion of requested data that can be delivered via D2D links, is taken as the performance metric. We first approximate the distribution of the communication time for a given user by a beta distribution through moment matching. With this approximation, an accurate expression of the data offloading ratio is derived. For the homogeneous case where the average contact and inter-contact times of different user pairs are identical, we prove that the data offloading ratio increases with the user moving speed, assuming that the transmission rate remains the same. Simulation results are provided to show the accuracy of the approximate result, and also validate the effect of user mobility. Comment: 6 pages, 5 figures, accepted to IEEE Int. Conf. Commun. (ICC), Paris, France, May 2017.
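    The moment-matching step mentioned above has a closed form: a Beta(alpha, beta) distribution with a given mean and variance is recovered from the standard Beta moment identities. A sketch, with illustrative moment values:

    ```python
    # Fit Beta(alpha, beta) to a given mean and variance on (0, 1) using
    # the standard moment-matching identities:
    #   mean = a/(a+b),  var = a*b / ((a+b)^2 (a+b+1)).
    # The mean/variance values below are illustrative.

    def beta_from_moments(mean, var):
        """Return (alpha, beta) matching the given first two moments."""
        if not (0.0 < mean < 1.0) or var <= 0.0 or var >= mean * (1.0 - mean):
            raise ValueError("moments not realizable by a Beta distribution")
        common = mean * (1.0 - mean) / var - 1.0
        return mean * common, (1.0 - mean) * common

    a, b = beta_from_moments(0.3, 0.02)   # a = 2.85, b = 6.65
    ```

    Plugging the fitted (a, b) back into the moment identities reproduces the input mean and variance exactly, which is the sense in which the approximation matches the first two moments of the communication time.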

    Solar-neutrino reactions on deuteron in effective field theory

    The cross sections for low-energy neutrino-deuteron reactions are calculated within heavy-baryon chiral perturbation theory employing a cutoff regularization scheme. The transition operators are derived up to next-to-next-to-next-to-leading order in the Weinberg counting rules, while the nuclear matrix elements are evaluated using the wave functions generated by a high-quality phenomenological NN potential. With the axial-current four-nucleon coupling constant fixed from tritium beta-decay data, our calculation is free from unknown low-energy constants. Our results exhibit a high degree of stability against different choices of the cutoff parameter, a feature which indicates that, apart from radiative corrections, the uncertainties in the calculated cross sections are less than 1%. Comment: 12 pages, 3 figures. Error estimation of higher-order corrections detailed.

    Time-Deformation Modeling Of Stock Returns Directed By Duration Processes

    This paper presents a new class of time-deformation (or stochastic volatility) models for stock returns sampled in transaction time and directed by a generalized duration process. Stochastic volatility in this model is driven by an observed duration process and a latent autoregressive process. Parameter estimation in the model is carried out by using the method of simulated moments (MSM) due to its analytical feasibility and numerical stability for the proposed model. Simulations are conducted to validate the choices of the moments used in the formulation of the MSM. Both the simulation and empirical results obtained in this paper indicate that this approach works well for the proposed model. The main empirical findings for the IBM transaction return data can be summarized as follows: (i) the return distribution conditional on the duration process is not Gaussian, even though the duration process itself can marginally function as a directing process; (ii) the return process is highly leveraged; (iii) a longer trade duration tends to be associated with a higher return volatility; and (iv) the proposed model is capable of reproducing returns whose marginal density function is close to that of the empirical returns. Keywords: Duration process; Ergodicity; Method of simulated moments; Return process; Stationarity.
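    The core MSM idea used for estimation is to pick the parameter value whose simulated moments best match the empirical ones. A deliberately minimal sketch, estimating only the scale of a normal distribution from its second moment; the paper's stochastic-volatility model and moment conditions are far richer:

    ```python
    # Toy method of simulated moments: minimize the squared gap between
    # an empirical moment and the same moment computed from data
    # simulated at a candidate parameter value.  Everything here is
    # illustrative; the true scale of the synthetic data is 2.0.
    import numpy as np

    data = np.random.default_rng(0).normal(0.0, 2.0, size=50_000)
    emp_moment = np.mean(data**2)            # empirical second moment

    def msm_objective(sigma, n_sim=50_000):
        """Squared distance between simulated and empirical moments."""
        sim = np.random.default_rng(1).normal(0.0, sigma, size=n_sim)
        return (np.mean(sim**2) - emp_moment) ** 2

    # crude grid search over the scale parameter
    grid = np.linspace(0.5, 4.0, 71)
    sigma_hat = grid[np.argmin([msm_objective(s) for s in grid])]
    ```

    Fixing the simulation seed inside the objective is the standard trick to keep the objective smooth in the parameter; in practice one uses many moment conditions and a proper optimizer rather than a grid.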

    Using handheld pocket computers in a wireless telemedicine system

    Objectives: To see whether senior emergency nurse practitioners can support inexperienced ones in a Minor Injuries Unit by using a wireless LAN telemedicine system that transmits images to a PDA while they are on duty, and whether such a system could be sufficiently accurate to make clinical diagnoses with a high level of diagnostic confidence. This would permit a lower grade of nurse to be employed to manage most cases as they arrive, with a proportionate lowering of costs. Methods: The wireless LAN equipment could roam within the Minor Injuries Unit, while the experienced emergency nurse practitioners could be at home, shopping, or at a considerable distance from the centre. Thirty pictorial images of patients who had been sent to the Review Clinic were transmitted to a PDA at distances of one to sixteen miles from the centre. Two senior emergency nurse practitioners viewed the images and gave opinions on the diagnosis, their degree of confidence in the diagnosis, and the quality of the image. Results: The images of patients were sharp, clear, and of diagnostic quality. Image quality and diagnostic confidence were uncertain only when the patient was very dark-skinned. Conclusions: The wireless LAN system works with a remote PDA in this clinical situation. However, there are questions over the availability of enough experienced emergency nurse practitioners to staff a service that provides senior cover for longer parts of the day and at weekends.